
Neural Information Processing Systems

This section reports the recall performance of MHN and BayesPCN models on high query noise associative recall tasks. Table 5 describes the CIFAR10 recall results of nine structurally identical BayesPCN models with four hidden layers of size 1024, a single particle, and GELU activations, but with different values of σ_W and σ_x. On visual inspection, we found that the model's auto-associative recall outputs for both observed and unobserved inputs became less blurry as more datapoints were written into memory. Both GPCN and BayesPCN are, at their core, as much generative models as they are associative memories.
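The architecture described above (four hidden layers of width 1024 with GELU activations, with σ_W scaling the weight prior) can be sketched as a plain feedforward network. This is only an illustrative stand-in, assuming a simple N(0, σ_W²) Gaussian weight prior and the common tanh approximation of GELU; it is not the BayesPCN inference or write procedure itself.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

class MLPMemorySketch:
    """Toy stand-in for the described architecture: four hidden layers
    of width 1024 with GELU activations. Layer widths and the role of
    sigma_W come from the text; the forward pass and the N(0, sigma_W^2)
    weight initialization are illustrative assumptions only."""

    def __init__(self, in_dim, hidden=1024, n_hidden=4, sigma_w=0.1, seed=0):
        rng = np.random.default_rng(seed)
        dims = [in_dim] + [hidden] * n_hidden + [in_dim]
        # hypothetical prior: weights drawn i.i.d. from N(0, sigma_W^2)
        self.weights = [rng.normal(0.0, sigma_w, (a, b))
                        for a, b in zip(dims[:-1], dims[1:])]

    def forward(self, x):
        # GELU on every hidden layer, linear output layer
        for W in self.weights[:-1]:
            x = gelu(x @ W)
        return x @ self.weights[-1]
```

For CIFAR10 auto-association the input dimension would be the flattened image size (3072), with the network mapping a (possibly corrupted) image back toward itself.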





BayesPCN: A Continually Learnable Predictive Coding Associative Memory

Yoo, Jason, Wood, Frank

arXiv.org Artificial Intelligence

Associative memory plays an important role in human intelligence and its mechanisms have been linked to attention in machine learning. While the machine learning community's interest in associative memories has recently been rekindled, most work has focused on memory recall ($read$) over memory learning ($write$). In this paper, we present BayesPCN, a hierarchical associative memory capable of performing continual one-shot memory writes without meta-learning. Moreover, BayesPCN is able to gradually forget past observations ($forget$) to free its memory. Experiments show that BayesPCN can recall corrupted i.i.d. high-dimensional data observed hundreds to a thousand ``timesteps'' ago without a large drop in recall ability compared to the state-of-the-art offline-learned parametric memory models.
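The read/write/forget interface the abstract describes can be illustrated with a minimal nonparametric memory. The sketch below is an assumption for exposition only: it uses a softmax-retrieval (modern-Hopfield-style) store rather than BayesPCN's parametric Bayesian updates, but it shows the same operations, including one-shot writes with no meta-learning and recall from corrupted queries.

```python
import numpy as np

class ToyAssociativeMemory:
    """Minimal stand-in for the read/write/forget interface.
    NOT the BayesPCN update rule -- retrieval here is a single
    softmax-weighted average over explicitly stored patterns."""

    def __init__(self, beta=8.0):
        self.patterns = []  # stored datapoints
        self.beta = beta    # inverse temperature for retrieval sharpness

    def write(self, x):
        # one-shot write: no gradient steps or meta-learning, just store
        self.patterns.append(np.asarray(x, dtype=float))

    def read(self, query):
        # recall: softmax-weighted average of stored patterns,
        # so a corrupted query snaps toward its nearest stored pattern
        M = np.stack(self.patterns)
        scores = self.beta * (M @ np.asarray(query, dtype=float))
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ M

    def forget(self, idx):
        # free memory by dropping a stored entry
        self.patterns.pop(idx)
```

With a sufficiently large `beta`, reading with a noise-corrupted copy of a stored pattern returns something very close to the clean original, which is the auto-associative recall behaviour the experiments measure.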